AutoFeedback: An LLM-based Framework for Efficient and Accurate API Request Generation
Liu, Huanxi, Liao, Jiaqi, Feng, Dawei, Xu, Kele, Wang, Huaimin
Large Language Models (LLMs) leverage external tools primarily by generating API requests to enhance task completion efficiency. The accuracy of API request generation significantly determines the capability of LLMs to accomplish tasks. Due to the inherent hallucinations within LLMs, it is difficult to efficiently and accurately generate correct API requests. Current research uses prompt-based feedback to facilitate LLM-based API request generation. However, existing methods lack factual information and are insufficiently detailed. To address these issues, we propose AutoFeedback, an LLM-based framework for efficient and accurate API request generation, with a Static Scanning Component (SSC) and a Dynamic Analysis Component (DAC). SSC incorporates errors detected in the API requests as pseudo-facts into the feedback, enriching the factual information. DAC retrieves information from API documentation, enhancing the level of detail in feedback. Based on these two components, AutoFeedback implements two feedback loops during the process of generating API requests by the LLM. Extensive experiments demonstrate that it significantly improves the accuracy of API request generation and reduces the interaction cost. AutoFeedback achieves an accuracy of 100.00\% on a real-world API dataset and reduces the cost of interaction with GPT-3.5 Turbo by 23.44\% and with GPT-4 Turbo by 11.85\%.
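The abstract's two-stage feedback mechanism can be illustrated with a minimal sketch: a static check validates a proposed request against known API schemas, and a documentation lookup enriches any detected errors before they are fed back to the LLM. All function names and data shapes here are hypothetical illustrations, not the actual AutoFeedback implementation.

```python
# Hypothetical sketch of a two-stage feedback loop for LLM API request
# generation, modeled on the abstract's description of AutoFeedback.
# static_scan ~ SSC (error detection as pseudo-facts),
# dynamic_analyze ~ DAC (documentation retrieval for detail).

def static_scan(request: dict, known_apis: dict) -> list:
    """Static check: detect errors in a request without executing it."""
    errors = []
    api = known_apis.get(request.get("endpoint"))
    if api is None:
        errors.append("Unknown endpoint: %r" % request.get("endpoint"))
        return errors
    for param in api["required"]:
        if param not in request.get("params", {}):
            errors.append("Missing required parameter: %r" % param)
    return errors

def dynamic_analyze(errors: list, docs: dict) -> str:
    """Enrich detected errors with matching API documentation snippets."""
    lines = []
    for err in errors:
        lines.append(err)
        for name, doc in docs.items():
            if name in err:
                lines.append("  docs[%s]: %s" % (name, doc))
    return "\n".join(lines)

def generate_with_feedback(llm, task, known_apis, docs, max_rounds=3):
    """Loop: the LLM proposes a request; detailed feedback drives retries."""
    feedback = ""
    request = None
    for _ in range(max_rounds):
        request = llm(task, feedback)            # LLM proposes an API request
        errors = static_scan(request, known_apis)
        if not errors:
            return request                       # passed static checks
        feedback = dynamic_analyze(errors, docs) # feed details back to the LLM
    return request
```

With a stub LLM that corrects itself once it receives feedback, the loop converges in two rounds; the real framework's value lies in how much factual detail the feedback carries.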
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Li, Chenliang, Chen, Hehong, Yan, Ming, Shen, Weizhou, Xu, Haiyang, Wu, Zhikai, Zhang, Zhicheng, Zhou, Wenmeng, Chen, Yingda, Cheng, Chen, Shi, Hongzhu, Zhang, Ji, Huang, Fei, Zhou, Jingren
Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and exhibit planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent frameworks that equip LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs. In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library, with customizable engine design to support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of the ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent library\footnote{https://github.com/modelscope/modelscope-agent} and online demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now publicly available.
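The tool registration and retrieval steps the abstract lists could look like the following minimal sketch. The class, method names, and keyword-overlap scoring are hypothetical illustrations, not the ModelScope-Agent API; a real framework would use an embedding index for retrieval.

```python
# Hypothetical sketch of tool registration and retrieval for an LLM agent,
# illustrating the pipeline described in the abstract (registration ->
# retrieval -> invocation). Not the actual ModelScope-Agent API.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name: str, description: str, fn):
        """Tool registration: expose a callable to the agent by name."""
        self._tools[name] = {"description": description, "fn": fn}

    def retrieve(self, query: str, top_k: int = 1) -> list:
        """Tool retrieval: naive keyword overlap stands in for embeddings."""
        query_words = set(query.lower().split())
        def score(item):
            desc_words = set(item[1]["description"].lower().split())
            return len(query_words & desc_words)
        ranked = sorted(self._tools.items(), key=score, reverse=True)
        return [name for name, _ in ranked[:top_k]]

    def call(self, name: str, **kwargs):
        """Invoke a registered tool with the arguments the LLM produced."""
        return self._tools[name]["fn"](**kwargs)

registry = ToolRegistry()
registry.register("translate", "translate text between languages",
                  lambda text, lang: "[%s] %s" % (lang, text))
registry.register("image_gen", "generate an image from a prompt",
                  lambda prompt: "img:%s" % prompt)
best = registry.retrieve("please translate this text")[0]
```

In this toy setup, a query mentioning translation retrieves the `translate` tool, which the agent can then call with structured arguments.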
Deploy any ML Model to Any Cloud Platform
Model serving isn't just a hard problem, it's a hard problem that constantly demands new solutions. Model serving, as part of MLOps, is the DevOps challenge of keeping a complicated, fragile artifact (the model) working in multiple dynamic environments. As frameworks are built and updated for training models, and production environments evolve with new capabilities and constraints, data scientists have to reimplement model serving scripts and rebuild model deployment processes. Data scientists working in large, well-resourced organizations can hand off their models to specialized MLOps teams for serving and deployment. But for those of us at start-ups and newer companies, as I was for the first decade of my career, the ML deployment challenge is ours to handle.
Railyard: how we rapidly train machine learning models with Kubernetes
Stripe uses machine learning to respond to our users' complex, real-world problems. Machine learning powers Radar to block fraud, and Billing to retry failed charges on the network. Stripe serves millions of businesses around the world, and our machine learning infrastructure scores hundreds of millions of predictions across many machine learning models. These models are powered by billions of data points, with hundreds of new models being trained each day. Over time, the volume, quality of data, and number of signals have grown enormously as our models continuously improve in performance.
What is the GAI token (GraphGrail AI)? – Graph Grail AI – Medium
A token is a unit of account in a blockchain network used to represent the digital balance of a certain asset. Accounting for tokens is based on technologies that are accessed through special applications using electronic signature algorithms. In the world of cryptocurrencies, tokens are electronic units issued for tasks such as selling shares, lending, monetizing additional services for network users, storing value, attracting financing through the creation of decentralized blockchain networks, and providing functional utility within platforms. The main purpose of utility tokens is to pay for internal network services of a particular project. Holding them allows the user to access additional functions, features, and capabilities in a decentralized network.